Add duplicate issue cleanup to CI failure doctor workflow #14396
Conversation
- Add update-issue safe output permission
- Update Phase 6 to include logic for closing older duplicate issues
- Rename Phase 6 to Phase 7 (Reporting and Recommendations)

Co-authored-by: pelikhan <4175913+pelikhan@users.noreply.github.com>
Pull request overview
This PR updates the CI Failure Doctor agentic workflow to support duplicate-issue cleanup by enabling an `update-issue` safe output and revising the runbook to search for existing issues, close older duplicates, and renumber later phases.
Changes:
- Added `update-issue` to the workflow's `safe-outputs` so the agent can programmatically update/close issues.
- Expanded Phase 6 guidance to search for duplicates, close older ones, and skip creating a new issue if a very recent duplicate exists.
- Regenerated the compiled lockfile to include the new safe output/tool schema updates.
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| pkg/parser/import_processor.go | Minor formatting/whitespace alignment in import processing output struct population. |
| .github/workflows/ci-doctor.md | Enables update-issue and updates the workflow instructions/phases for duplicate handling. |
| .github/workflows/ci-doctor.lock.yml | Updates compiled safe-outputs configuration and tool definitions to include update_issue. |
Comments suppressed due to low confidence (1)
.github/workflows/ci-doctor.lock.yml:1272
`GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG` enables `update_issue` with only `{max:1}`; without `allow_status`/`target` flags this won't be able to close duplicates (and won't run outside issue context). Regenerate the lockfile after updating the workflow frontmatter so this env config includes the needed `allow_status` and `target` values (and a higher `max` if multiple duplicates should be processed).
```yaml
env:
  GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
  GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_issue\":{\"expires\":24,\"labels\":[\"cookie\"],\"max\":1,\"title_prefix\":\"[CI Failure Doctor] \"},\"missing_data\":{},\"missing_tool\":{},\"update_issue\":{\"max\":1}}"
with:
```
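After the frontmatter is fixed and the lockfile regenerated (per the suggestions later in this review), the `update_issue` entry in this env config would presumably carry the extra keys, roughly like this (a sketch assembled from the review's suggested values, not actual compiler output):

```yaml
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":1},\"create_issue\":{\"expires\":24,\"labels\":[\"cookie\"],\"max\":1,\"title_prefix\":\"[CI Failure Doctor] \"},\"missing_data\":{},\"missing_tool\":{},\"update_issue\":{\"allow_status\":true,\"max\":5,\"target\":\"*\"}}"
```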
```yaml
    title-prefix: "[CI Failure Doctor] "
    labels: [cookie]
  add-comment:
  update-issue:
```
Copilot AI · Feb 7, 2026
`safe-outputs.update-issue:` is enabled but not configured to actually allow any updates. In this codebase, update permissions are granted by the presence of `status:`, `title:`, and/or `body:` keys, and the default target is `triggering` (which will be skipped for this workflow, since it runs on `workflow_run`). To make duplicate closure possible, configure `update-issue` with at least `status:` enabled and set `target: "*"` (so the agent can specify `issue_number`), and consider raising `max` above 1 if you expect to close multiple duplicates per run.
Suggested change:

```yaml
  update-issue:
    status:
    target: "*"
    max: 10
```
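Applied to the frontmatter context above, the full `safe-outputs` block would then read roughly as follows (a sketch: the `safe-outputs:`/`create-issue:` nesting and the `expires` key are inferred from the compiled config shown elsewhere in this review, not from this diff):

```yaml
safe-outputs:
  create-issue:
    title-prefix: "[CI Failure Doctor] "
    labels: [cookie]
    expires: 24
  add-comment:
  update-issue:
    status:
    target: "*"
    max: 10
```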
```markdown
- Add a comment explaining this is a duplicate of the new investigation
- Use the `update-issue` tool with `state: "closed"` and `state_reason: "not_planned"` to close them
```
Copilot AI · Feb 7, 2026
The instructions mention calling `update-issue` with `state` and `state_reason`, but the safe output tool schema uses `status` (open/closed) and does not support `state_reason`. As written, the agent will generate invalid tool calls and fail to close duplicates. Update the guidance to use `status: "closed"` (and add rationale via `add-comment`) rather than `state`/`state_reason`.
Suggested change:

```markdown
- Add a comment explaining this is a duplicate of the new investigation and why you're closing it
- Use the `update-issue` tool with `status: "closed"` to close them
```
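With the corrected guidance, the safe-output entries the agent emits for one duplicate might look like the following (a sketch: the JSONL envelope and the `type`/`body` field names are assumptions about gh-aw's safe-output format; only `status` and `issue_number` are confirmed by the review comments here, and the issue number is illustrative):

```json
{"type": "add_comment", "issue_number": 1234, "body": "Closing as a duplicate of the new investigation."}
{"type": "update_issue", "issue_number": 1234, "status": "closed"}
```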
```diff
 mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs
 cat > /opt/gh-aw/safeoutputs/config.json << 'EOF'
-{"add_comment":{"max":1},"create_issue":{"expires":24,"max":1},"missing_data":{},"missing_tool":{},"noop":{"max":1}}
+{"add_comment":{"max":1},"create_issue":{"expires":24,"max":1},"missing_data":{},"missing_tool":{},"noop":{"max":1},"update_issue":{"max":1}}
```
Copilot AI · Feb 7, 2026
In the compiled safe-outputs config JSON, `update_issue` is present but lacks `allow_status` and a non-`triggering` `target`. With this configuration, the update runner will refuse to apply status updates and will also skip entirely in a `workflow_run` context, so the workflow cannot actually close older duplicates. After fixing the frontmatter, recompile so `update_issue` includes `allow_status: true`, `target: "*"`, and a `max` high enough to close multiple duplicates if desired.
This issue also appears on line 1269 of the same file.
| {"add_comment":{"max":1},"create_issue":{"expires":24,"max":1},"missing_data":{},"missing_tool":{},"noop":{"max":1},"update_issue":{"max":1}} | |
| {"add_comment":{"max":1},"create_issue":{"expires":24,"max":1},"missing_data":{},"missing_tool":{},"noop":{"max":1},"update_issue":{"allow_status":true,"target":"*","max":5}} |
CI failure doctor now searches for and closes older duplicate issues to reduce issue tracker noise.
Changes
- Added the `update-issue` safe output to enable programmatic issue closure
- Older duplicates are closed via the `update-issue` tool with `state: "closed"` and `state_reason: "not_planned"`

Example behavior
When CI failure doctor runs:
The workflow intelligently distinguishes between true duplicates (same root cause) and distinct failures (different components/errors).
Original prompt
This section details the original issue you should resolve.
<issue_title>[CI Failure Doctor] 🏥 CI Failure Investigation - Run #34273</issue_title>
<issue_description>## Summary
jsandjs-integration-live-apijobs now fail because the new frontmatter hash regression tests expect the JavaScript implementation to match the Go hash for.github/workflows/audit-workflows.mdafter the runtime deduplication change.Failure Details
- Commit: `c895fa5f428d42b5c0f9bd95b8187c40158cf7a7`
- Branch: `main`

## Root Cause Analysis
The Go implementation of `ComputeFrontmatterHash` now produces `ff56a35b191afe28f76a09217de1597695a225e8f41502cc815a0a9e0eb2f96e` for `audit-workflows.md` (after the recent runtime deduplication work), but the pure JavaScript implementation in `actions/setup/js/frontmatter_hash_pure.cjs` still builds a much simpler canonical representation (only `frontmatter-text`, imports, imported frontmatters, and template expressions). The new regression test asserts `jsHash === goHash` and now fails because the JS algorithm never incorporated the merged runtime/tool deduplication strings that the Go canonical JSON now includes.

## Failed Jobs and Errors
- `js` → `npm test -- frontmatter_hash_github_api.test.cjs` fails in `frontmatter_hash_github_api.test.cjs > frontmatter_hash with GitHub API > cross-language validation > should compute same hash as Go implementation when using file system`, expecting JS hash `bb5cbd9...` to be Go hash `ff56a35...`.
- `js-integration-live-api` → same npm test executed with live API flag and identical assertion failure (same expected/received hashes).

## Investigation Findings
- `actions/setup/js/frontmatter_hash_pure.cjs` exported helper `computeFrontmatterHash` still canonicalizes only the text-based frontmatter, imports, and template expressions; it never replays Go's field merging or merged `runtimes`/`tools` strings that were touched by the runtime deduplication change.
- The JS implementation yields `bb5cbd9552401591e9476ae803f1736a88dca3f654f725dadffa5a7dbc31d639` while Go now outputs `ff56a35b191afe28f76a09217de1597695a225e8f41502cc815a0a9e0eb2f96e`.
- Both failing jobs run `npm test -- frontmatter_hash_github_api.test.cjs` (the second adds a live GitHub API run) and reproduce the diff reliably.
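To make the failure mode concrete, here is a minimal sketch of cross-language canonical hashing (an illustration only, assuming sorted-key JSON canonicalization and SHA-256; it is not the repository's actual algorithm):

```js
// Illustration: a cross-language canonical hash only matches when both
// implementations serialize the same fields the same way, byte for byte.
const crypto = require("crypto");

// Hypothetical canonical form: top-level keys sorted so Go and JS agree.
function canonicalize(obj) {
  return JSON.stringify(obj, Object.keys(obj).sort());
}

function frontmatterHash(canonicalObj) {
  return crypto.createHash("sha256").update(canonicalize(canonicalObj)).digest("hex");
}

// Any field Go adds to its canonical JSON (e.g. merged runtimes/tools
// strings) that JS omits changes the serialized bytes, and thus the hash.
console.log(frontmatterHash({ frontmatter: "on: push", imports: [] }));
```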
## Recommended Actions

- Update `actions/setup/js/frontmatter_hash_pure.cjs` (and any wrappers used by `frontmatter_hash.cjs`) so the canonical JSON matches Go's `buildCanonicalFrontmatter` output (including merged runtimes, services, tools, etc.). The cross-language test should no longer fail once Go and JS canonical representations align.
- Re-run the failing tests locally (`cd actions/setup/js && npm test -- frontmatter_hash_github_api.test.cjs`) and ensure both jobs pass before rerunning CI.

## Prevention Strategies
- Keep the JS implementation aligned with `pkg/parser/frontmatter_hash.go`, especially after future runtime or imports deduplication changes.
- Keep `FRONTMATTER_HASH_SUMMARY.md` up to date so JS engineers can replicate Go's field merging behavior.

## AI Team Self-Improvement
When runtime deduplication or merged field logic changes are introduced in Go, remind yourself to re-run the JavaScript regression tests (`npm test -- frontmatter_hash_github_api.test.cjs`) and align `frontmatter_hash_pure.cjs` with whatever canonical JSON `pkg/parser/frontmatter_hash.go` now emits.

## Historical Context
The frontmatter hash cross-language mismatch has been known (see `FRONTMATTER_HASH_SUMMARY.md`), but this is the first time a regression test validates `audit-workflows.md` directly against Go's output; the new runtime deduplication work changed Go's hash, so the previously simplified JS implementation is now too far out of sync.

Comments on the Issue (you are @copilot in this ...